Evolving Inborn Knowledge For Fast Adaptation in Dynamic POMDP Problems
Rapid online adaptation to changing tasks is an important problem in machine
learning and, recently, a focus of meta-reinforcement learning. However,
reinforcement learning (RL) algorithms struggle in partially observable Markov
decision process (POMDP) environments because the state of the system,
essential in an RL framework, is not always observable.
Additionally, hand-designed meta-RL architectures may not include suitable
computational structures for specific learning problems. By contrast, the
evolution of online learning mechanisms can incorporate learning strategies
into an agent that can (i) evolve memory when required and (ii) optimize
adaptation speed for specific online learning problems. In this
paper, we exploit the highly adaptive nature of neuromodulated neural networks
to evolve a controller that uses the latent space of an autoencoder in a POMDP.
Analysis of the evolved networks reveals that the proposed algorithm acquires
inborn knowledge of several kinds, such as the detection of cues that reveal
implicit rewards and the evolution of location neurons that aid navigation.
The integration of inborn knowledge and online plasticity enabled fast
adaptation and better performance than some non-evolutionary
meta-reinforcement learning algorithms. The algorithm also proved successful
in the 3D gaming environment Malmo Minecraft.

Comment: 9 pages. Accepted as a full paper at the Genetic and Evolutionary
Computation Conference (GECCO 2020).
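The abstract does not spell out the plasticity rule, but neuromodulated Hebbian plasticity of the kind this line of work builds on is commonly written as dw = m * eta * (A*pre*post + B*pre + C*post + D). Below is a minimal sketch of that idea, assuming illustrative layer sizes and coefficients and an `encode` stub in place of the trained autoencoder; the paper's actual topology and coefficients are produced by the evolutionary search, not fixed as here.

```python
# Sketch of a neuromodulated Hebbian controller reading an autoencoder's
# latent code. Shapes and constants are illustrative assumptions; in the
# paper, structure and coefficients are subject to evolutionary search.
import numpy as np

rng = np.random.default_rng(0)
LATENT, HIDDEN, ACTIONS = 16, 32, 4

W_in = rng.normal(0, 0.1, (HIDDEN, LATENT))    # plastic weights, adapted online
W_out = rng.normal(0, 0.1, (ACTIONS, HIDDEN))  # readout weights
A, B, C, D, eta = 0.1, 0.01, 0.01, 0.0, 0.05   # plasticity coefficients (assumed)

def encode(observation):
    """Stub standing in for the pretrained autoencoder's encoder (assumption)."""
    return np.tanh(observation[:LATENT])

def step(observation, modulation):
    """One control step with a neuromodulated Hebbian weight update.

    `modulation` stands for the output of a modulatory neuron: when it is
    near zero the weights barely change, which gates plasticity to moments
    that matter (e.g., a cue that reveals where reward is hidden).
    """
    global W_in
    z = encode(observation)            # latent state from the autoencoder
    h = np.tanh(W_in @ z)              # plastic hidden layer
    action_logits = W_out @ h
    # Generalized Hebbian rule: dw = m * eta * (A*pre*post + B*pre + C*post + D)
    dW = modulation * eta * (A * np.outer(h, z)
                             + B * z[None, :]
                             + C * h[:, None]
                             + D)
    W_in += dW
    return action_logits

logits = step(rng.normal(size=64), modulation=0.5)
print(logits)
```

Gating the update by the modulatory signal is what lets evolved "inborn" behavior coexist with fast online change: weights stay put until a modulatory neuron fires.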
Learning and Analyzing Representations for Meta-Learning and Control
While artificial learning agents have demonstrated impressive capabilities, these successes are typically realized in narrowly defined problems and require large amounts of labeled data. Our agents struggle to leverage what they already know to generalize to new inputs and acquire new skills quickly, abilities that come naturally to humans. To learn and leverage the structure present in the world, we study data-driven abstractions of states and tasks. We begin with unsupervised state representation learning, in which the goal is to learn a compact state representation that discards irrelevant information but preserves the information needed to learn the optimal policy. Surprisingly, we find that several commonly used objectives are not guaranteed to produce sufficient representations, and we demonstrate that our theoretical findings are reflected empirically in simple visual RL domains.

Next, we turn to learning abstractions of tasks, a problem typically studied as meta-learning. Meta-learning endows artificial agents with this capability by leveraging a set of related training tasks to learn an adaptation mechanism that can acquire new skills from little supervision. We adopt an inference perspective that casts meta-learning as learning probabilistic task representations, framing the problem of learning to learn as learning to infer hidden task variables from experience. Leveraging this viewpoint, we propose meta-learning algorithms for diverse applications: image segmentation, state-based robotic control, and robotic control from sensory observations. We find that an inference approach to these problems constitutes an efficient and practical choice, while also revealing deeper connections between meta-learning and other concepts in statistical learning.
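To make the inference perspective concrete, here is a minimal sketch of meta-learning as probabilistic task inference in the style of context-conditioned methods such as PEARL: an amortized encoder maps a handful of context transitions to a Gaussian posterior over a latent task variable z, and the policy conditions on a sample of z. The module names, dimensions, and mean-pooling aggregation are illustrative assumptions, not the thesis's exact design.

```python
# Minimal sketch of meta-learning as task inference: an encoder maps context
# transitions to a Gaussian posterior over a latent task variable z, and the
# policy conditions on a sample of z. Sizes and names are assumptions.
import torch
import torch.nn as nn

OBS, ACT, LATENT = 8, 2, 5
TRANSITION = OBS + ACT + 1 + OBS   # flattened (s, a, r, s') tuple

class TaskEncoder(nn.Module):
    """Amortized inference network q(z | context)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TRANSITION, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * LATENT))
    def forward(self, context):                # context: (N, TRANSITION)
        stats = self.net(context).mean(dim=0)  # permutation-invariant pooling
        mu, log_std = stats[:LATENT], stats[LATENT:]
        return torch.distributions.Normal(mu, log_std.exp())

class Policy(nn.Module):
    """Policy conditioned on the state and the inferred task variable."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS + LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, ACT))
    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

encoder, policy = TaskEncoder(), Policy()
context = torch.randn(10, TRANSITION)   # a few transitions from the new task
q_z = encoder(context)                  # posterior over the hidden task variable
z = q_z.rsample()                       # reparameterized sample, differentiable
action = policy(torch.randn(OBS), z)
# Training would maximize return while regularizing q(z | context) toward a
# standard normal prior via a KL term, as in variational inference.
kl = torch.distributions.kl_divergence(
    q_z, torch.distributions.Normal(torch.zeros(LATENT),
                                    torch.ones(LATENT))).sum()
print(action.shape, kl.item())
```

Under this framing, adaptation at meta-test time reduces to inference: collect a little experience on the new task and update q(z | context), with no gradient steps on the policy itself.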